
    A perspective on the extension of stochastic orderings to fuzzy random variables

    In this paper we study how to make joint extensions of stochastic orderings and interval orderings, so as to extend methods for comparing random variables in terms of their respective location or magnitude to fuzzy random variables. The main idea is that the way fuzzy random variables are interpreted affects the choice of comparison methods. We distinguish three views of fuzzy random variables, under which various comparison methods seem to make sense. This paper offers an approach toward a systematic classification of combinations of stochastic and interval (or fuzzy interval) comparison methods.
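    As a purely illustrative, hedged sketch (not taken from the paper), the two basic ingredients being combined can be written down in their simplest classical forms: first-order stochastic dominance between discrete random variables and the classical interval order between intervals. The function names and toy data below are assumptions.

```python
# Illustrative sketch only: two of the basic orderings the paper combines,
# in their simplest classical forms. Names and data are assumed, not quoted.

def stochastically_dominates(pmf_x, pmf_y, support):
    """First-order stochastic dominance: X >= Y iff F_X(t) <= F_Y(t) for all t."""
    cdf_x = cdf_y = 0.0
    for t in sorted(support):
        cdf_x += pmf_x.get(t, 0.0)
        cdf_y += pmf_y.get(t, 0.0)
        if cdf_x > cdf_y + 1e-12:
            return False
    return True

def interval_strictly_greater(a, b):
    """Classical interval order: [a1, a2] > [b1, b2] iff a1 > b2."""
    return a[0] > b[1]

# X puts more mass on large values than Y, so X dominates Y stochastically.
support = [0, 1, 2]
pmf_x = {0: 0.1, 1: 0.3, 2: 0.6}
pmf_y = {0: 0.4, 1: 0.4, 2: 0.2}
print(stochastically_dominates(pmf_x, pmf_y, support))  # True
print(interval_strictly_greater((3, 5), (1, 2)))        # True
```

    A fuzzy random variable carries both kinds of imprecision at once, which is why the paper studies joint extensions of these two families of orderings.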

    Rough Sets, Coverings and Incomplete Information

    Rough sets are often induced by descriptions of objects based on precise observations of an insufficient number of attributes. In this paper, we study generalizations of rough sets to incomplete information systems involving imprecise observations of attributes. We describe the precise role of covering-based approximations of sets, which extend standard rough sets, in the presence of incomplete information about attribute values. In this setting, a covering encodes a set of possible partitions of the set of objects. A natural semantics of two possible generalizations of rough sets to the case of a covering (or a non-transitive tolerance relation) is laid bare. It is shown that uncertainty due to the granularity of the description of sets by attributes and uncertainty due to incomplete information are superposed, whereby upper and lower approximations themselves (in Pawlak's sense) become ill-known, each being bracketed by two nested sets. The notion of a measure of accuracy is extended to the incomplete information setting, and the generalization of this construct to fuzzy attribute mappings is outlined.
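    For readers unfamiliar with the two layers of approximation mentioned above, the following toy sketch (an assumption for illustration, not the paper's construction) contrasts Pawlak approximations induced by a partition with one common pair of covering-based bounds.

```python
# Toy sketch, not the paper's construction: lower/upper approximations of a
# set X from a partition (Pawlak) and from a covering of the universe.

def pawlak_approximations(partition, X):
    lower = set().union(*(B for B in partition if B <= X))
    upper = set().union(*(B for B in partition if B & X))
    return lower, upper

def covering_approximations(covering, X):
    # One common covering-based pair: keep an object in the lower bound only
    # if every block containing it lies inside X; the upper bound is the
    # union of all blocks meeting X.
    universe = set().union(*covering)
    lower = {x for x in universe if all(B <= X for B in covering if x in B)}
    upper = set().union(*(B for B in covering if B & X))
    return lower, upper

X = {1, 2, 3}
partition = [{1, 2}, {3, 4}, {5}]
covering = [{1, 2}, {2, 3}, {3, 4}, {5}]
print(pawlak_approximations(partition, X))   # ({1, 2}, {1, 2, 3, 4})
print(covering_approximations(covering, X))  # ({1, 2}, {1, 2, 3, 4})
```

    In the paper's setting the covering stands for a set of possible partitions, so the Pawlak approximations themselves become ill-known and are bracketed by bounds of this kind.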

    Belief Revision and the EM Algorithm

    This paper provides a natural interpretation of the EM algorithm as a succession of revision steps that try to find a probability distribution, within a parametric family of models, in agreement with frequentist observations over a partition of a domain. Each step of the algorithm corresponds to a revision operation that respects a form of minimal change. In particular, the so-called expectation step actually applies Jeffrey's revision rule to the current best parametric model so as to respect the frequencies in the available data. We also point out that, in the presence of incomplete data, one must be careful in defining the likelihood function in the maximization step, which may differ according to whether one is interested in precisely modeling the underlying random phenomenon together with the imperfect observation process, or in modeling the underlying random phenomenon alone, despite imprecision.
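    The revision step can be illustrated with a small hedged sketch (assumed, not quoted from the paper): Jeffrey's rule reallocates the mass of the current model across the cells of the observed partition so as to match the empirical frequencies, while keeping the conditional distributions inside each cell unchanged.

```python
# Illustrative sketch of one Jeffrey-style revision step, the operation the
# paper reads into the E-step. The finite domain and names are assumptions.

def jeffrey_revision(prior, partition, target_freqs):
    """Revise `prior` (dict: element -> probability) so that each cell of
    `partition` receives mass `target_freqs[cell]`, with minimal change:
    relative weights inside a cell are preserved."""
    revised = {}
    for cell, freq in zip(partition, target_freqs):
        cell_mass = sum(prior[e] for e in cell)
        for e in cell:
            revised[e] = freq * prior[e] / cell_mass
    return revised

prior = {'a1': 0.2, 'a2': 0.2, 'b1': 0.3, 'b2': 0.3}
partition = [('a1', 'a2'), ('b1', 'b2')]
observed_freqs = [0.7, 0.3]   # empirical frequencies over the partition
print(jeffrey_revision(prior, partition, observed_freqs))
# {'a1': 0.35, 'a2': 0.35, 'b1': 0.15, 'b2': 0.15}
```

    In full EM the revised distribution is then projected back onto the parametric family in the maximization step; the sketch only shows the revision component.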

    Peakedness and generalized entropy for continuous density functions

    Also part of the Lecture Notes in Artificial Intelligence book sub-series (LNAI, volume 6178).
    The theory of majorisation between real vectors with equal sums of components, which originated at the beginning of the twentieth century, enables a partial ordering between discrete probability distributions to be defined. It corresponds to comparing, via fuzzy set inclusion, the possibility distributions that are the most specific transforms of the original probability distributions. This partial ordering compares discrete probability distributions in terms of relative peakedness around their mode, and entropy is monotonic with respect to this partial ordering. In fact, all known variants of entropy share this monotonicity. In this paper, this question is studied in the case of unimodal continuous probability densities on the real line, for which a possibility transform around the mode exists. It corresponds to extracting the family of most precise prediction intervals. Comparing such prediction intervals for two densities yields a variant of relative peakedness in the sense of Birnbaum. We show that a generalized form of continuous entropy is monotonic with respect to this form of relative peakedness of densities.
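    As background for the discrete case recalled at the start of the abstract, here is a hedged sketch (an illustration, not the paper's continuous construction) of the most specific probability-to-possibility transform and the peakedness comparison it induces; the continuous case replaces it with nested prediction intervals around the mode.

```python
# Hedged sketch of the discrete probability-to-possibility transform and the
# peakedness comparison it induces. Names and data are illustrative.

def possibility_transform(pmf):
    """Most specific possibility distribution consistent with `pmf`:
    pi(x) = sum of the probabilities p(y) with p(y) <= p(x)."""
    return {x: sum(q for q in pmf.values() if q <= p) for x, p in pmf.items()}

def at_least_as_peaked(pmf_p, pmf_q):
    """p is at least as peaked as q when its (sorted) possibility transform is
    pointwise dominated by that of q, i.e. included in it as a fuzzy set."""
    pi_p = sorted(possibility_transform(pmf_p).values())
    pi_q = sorted(possibility_transform(pmf_q).values())
    return all(a <= b for a, b in zip(pi_p, pi_q))

sharp = {'x1': 0.7, 'x2': 0.2, 'x3': 0.1}
flat = {'y1': 0.4, 'y2': 0.3, 'y3': 0.3}
print(at_least_as_peaked(sharp, flat))  # True: the sharper pmf has lower entropy
```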

    Extreme points of credal sets generated by 2-alternating capacities

    The characterization of the extreme points constitutes a crucial issue in the investigation of convex sets of probabilities, not only from a purely theoretical point of view, but also as a tool in the management of imprecise information. In this respect, different authors have found an interesting relation between the extreme points of the class of probability measures dominated by a 2-alternating Choquet capacity and the permutations of the elements of the referential. However, they have all restricted their work to the case of a finite referential space. In an infinite setting, some technical complications arise and must be treated carefully. In this paper, we extend the mentioned result to the more general case of separable metric spaces. Furthermore, we derive some interesting topological properties of the convex sets of probabilities investigated here. Finally, a closer look is taken at the case of possibility measures: for them, we prove that the number of extreme points can be reduced even in the finite case.
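    A hedged finite-case illustration of the result being generalized (the paper's infinite-dimensional construction is not reproduced here): each permutation of the referential induces a candidate extreme point by allocating the increments of the capacity along the corresponding chain of sets. The code and the possibility-measure example are assumptions for illustration.

```python
# Finite-case illustration (assumed, not the paper's general construction):
# for a 2-alternating capacity mu on a finite referential, the extreme points
# of {P : P(A) <= mu(A) for all A} are induced by permutations of the elements.

from itertools import permutations

def permutation_extreme_points(elements, mu):
    """mu maps frozensets to [0, 1], with mu(empty) = 0 and mu(full) = 1.
    Returns the distinct probability vectors obtained along permutation chains."""
    points = set()
    for order in permutations(elements):
        p, prev = {}, frozenset()
        for x in order:
            cur = prev | {x}
            p[x] = mu[cur] - mu[prev]   # increment of the capacity along the chain
            prev = cur
        points.add(tuple(round(p[x], 10) for x in elements))
    return points

# Example: a possibility measure on {a, b, c}, a special 2-alternating capacity.
elements = ('a', 'b', 'c')
pi = {'a': 1.0, 'b': 0.6, 'c': 0.3}
mu = {frozenset(S): max((pi[x] for x in S), default=0.0)
      for r in range(len(elements) + 1) for S in permutations(elements, r)}
print(permutation_extreme_points(elements, mu))
```

    In this toy run the six permutations collapse to four distinct points, in line with the abstract's remark that possibility measures may have fewer extreme points even in the finite case.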

    Learning from imprecise examples with GA-P algorithms

    GA-P algorithms combine genetic programming and genetic algorithms to solve symbolic regression problems. In this work, we learn a model by means of an interval GA-P procedure that can use precise or imprecise examples. This method provides an analytic expression that shows the dependence between input and output variables, using interval arithmetic. The method also provides interval estimates of the parameters on which this expression depends. The proposed algorithm has been tested on a practical problem from electrical engineering: we obtain an expression for the length of the low-voltage electrical line in some Spanish villages as a function of their area and their number of inhabitants. The resulting model is compared to statistical regression-based, neural network, fuzzy rule-based and genetic programming-based models.
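    A minimal hedged sketch of the interval-arithmetic evaluation such a procedure relies on; the candidate expression, coefficients and fitness measure below are invented for illustration and are not the model reported in the paper.

```python
# Hedged sketch: evaluating a candidate symbolic expression with interval
# arithmetic against one imprecise example. Model form, coefficients and the
# fitness measure are illustrative assumptions, not the paper's results.

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def candidate_model(area, inhabitants, k0, k1, k2):
    """Toy candidate: length ~ k0 + k1*area + k2*inhabitants, on intervals."""
    return i_add((k0, k0), i_add(i_mul((k1, k1), area),
                                 i_mul((k2, k2), inhabitants)))

# One imprecise example: inputs and output known only as intervals.
area, inhabitants = (2.0, 2.5), (100.0, 120.0)
observed_length = (2.8, 3.6)
prediction = candidate_model(area, inhabitants, k0=1.0, k1=0.5, k2=0.01)
# A simple (assumed) fitness: distance between corresponding interval endpoints.
fitness = abs(prediction[0] - observed_length[0]) + abs(prediction[1] - observed_length[1])
print(prediction, fitness)   # (3.0, 3.45), approx. 0.35
```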

    A survey of concepts of independence for imprecise probabilities

    Our aim in this paper is to clarify the notion of independence for imprecise probabilities. Suppose that two marginal experiments are each described by an imprecise probability model, i.e., by a convex set of probability distributions or an equivalent model such as upper and lower probabilities or previsions. Then there are several ways to define independence of the two experiments and to construct an imprecise probability model for the joint experiment. We survey and compare six definitions of independence. To clarify the meaning of the definitions and the relationships between them, we give simple examples which involve drawing balls from urns. For each concept of independence, we give a mathematical definition, an intuitive or behavioural interpretation, assumptions under which the definition is justified, and an example of an urn model to which the definition is applicable. Each of the independence concepts we study appears to be useful in some kinds of application. The concepts of strong independence and epistemic independence appear to be the most frequently applicable.
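    As a hedged illustration of one of the surveyed concepts, strong independence, the following sketch (names and urn compositions are assumptions) builds the independent products of the extreme points of two marginal credal sets; the strongly independent joint credal set is their convex hull.

```python
# Hedged sketch of strong independence: combine every pair of extreme points
# of the two marginal credal sets into an independent product. The joint
# credal set is the convex hull of these products. Data are illustrative.

from itertools import product

def product_distribution(p, q):
    """Stochastically independent product of two discrete marginals (dicts)."""
    return {(x, y): px * qy for x, px in p.items() for y, qy in q.items()}

def strong_product_extremes(credal_x, credal_y):
    """Products of marginal extreme points; the strong extension is their convex hull."""
    return [product_distribution(p, q) for p, q in product(credal_x, credal_y)]

# Two marginal credal sets (given by their extreme points), e.g. draws from
# two separate urns whose compositions are only partially known.
credal_x = [{'red': 0.4, 'black': 0.6}, {'red': 0.7, 'black': 0.3}]
credal_y = [{'red': 0.5, 'black': 0.5}, {'red': 0.9, 'black': 0.1}]
for joint in strong_product_extremes(credal_x, credal_y):
    print(joint[('red', 'red')], joint[('black', 'black')])
```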